Information Theoretic Counterfactual Learning from Missing-Not-At-Random Feedback
Counterfactual learning for dealing with missing-not-at-random (MNAR) data is an intriguing topic in the recommendation literature, since MNAR data are ubiquitous in modern recommender systems. In contrast, most previous counterfactual learning methods require missing-at-random (MAR) data, namely randomized controlled trials (RCTs). However, executing RCTs is extraordinarily expensive in practice. To circumvent the use of RCTs, we build an information-theoretic counterfactual variational information bottleneck (CVIB) as an alternative for debiased learning without RCTs. By separating the task-aware mutual information term in the original information bottleneck Lagrangian into factual and counterfactual parts, we derive a contrastive information loss and an additional output confidence penalty, which together facilitate balanced learning between the factual and counterfactual domains. Empirical evaluation on real-world datasets shows that CVIB significantly enhances both shallow and deep models, which sheds light on counterfactual learning in recommendation beyond RCTs.
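The abstract describes an objective with three parts: a factual loss on observed feedback, a contrastive information term linking factual and counterfactual predictions, and an output confidence penalty. The sketch below is a hypothetical illustration of that structure, not the paper's exact derivation: the concrete form of the contrastive term, the entropy-based confidence penalty, and the balancing weights `alpha` and `gamma` are assumptions for exposition.

```python
import numpy as np

def cvib_style_loss(y_obs, p_obs, p_unobs, alpha=0.1, gamma=0.01):
    """Hypothetical sketch of a CVIB-style objective.

    y_obs   -- binary labels on observed (factual) user-item pairs
    p_obs   -- model predictions on those observed pairs
    p_unobs -- model predictions on unobserved (counterfactual) pairs
    alpha, gamma -- assumed balancing weights for the two regularizers
    """
    eps = 1e-8
    # Factual term: cross-entropy on the observed feedback.
    factual = -np.mean(y_obs * np.log(p_obs + eps)
                       + (1 - y_obs) * np.log(1 - p_obs + eps))
    # Contrastive information term (one possible proxy): align the
    # average factual and counterfactual prediction levels.
    m_obs, m_unobs = np.mean(p_obs), np.mean(p_unobs)
    contrastive = (-m_obs * np.log(m_unobs + eps)
                   - (1 - m_obs) * np.log(1 - m_unobs + eps))
    # Output confidence penalty: negative entropy of counterfactual
    # predictions, discouraging overconfident extrapolation.
    confidence = np.mean(p_unobs * np.log(p_unobs + eps)
                         + (1 - p_unobs) * np.log(1 - p_unobs + eps))
    return factual + alpha * contrastive + gamma * confidence
```

Note the design intent: with no randomized (MAR) data available, the two regularizers act only on model outputs, so no propensity scores or held-out RCT labels are needed.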
Response for "Information Theoretic Counterfactual Learning from MNAR Feedback"
We thank the reviewers for their time and for their valuable advice on our work.

We thank the reviewer for the advice to perform evaluation using other metrics from the literature. We will include these results in the final version of this paper.

Liang et al. (2015) considered recommendation from positive feedback alone (implicit data); however, we consider

We postulate that CVIB's weakness in MSE lies in the absence of

We believe that randomness in experiments does exist. On the other hand, Fig. 3 in the paper shows the 10 runs of MF-CVIB where it

Please refer to the response to Reviewer #1.